Fast rates with high probability in exp-concave statistical learning

Author

  • Nishant Mehta
Abstract

We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk O(d log(1/δ)/n) with probability 1−δ. The core technique is to boost the confidence of recent in-expectation O(d/n) excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. One bound obtains a nearly optimal rate without requiring the loss to be Lipschitz continuous, and another requires Lipschitz continuity but obtains the optimal rate.
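The confidence-boosting device the abstract alludes to can be sketched generically: an in-expectation bound plus Markov's inequality makes each independent ERM copy good with constant probability, so O(log(1/δ)) copies contain a good one with high probability, and a final selection step over the copies (cheap under a Bernstein condition) picks it out. Below is a minimal Python sketch under these assumptions; the callables `erm_fit` and `loss` and the split sizes are hypothetical illustrations, not the paper's exact procedure.

```python
import numpy as np

def boost_confidence(data, erm_fit, loss, delta, rng=None):
    """Generic confidence-boosting sketch (hypothetical helper names).

    Each ERM copy trained on a disjoint chunk is "good" with constant
    probability by Markov's inequality, so among K = O(log(1/delta))
    copies at least one is good with probability >= 1 - delta/2; the
    selection step over the K candidates (here: lowest empirical risk
    on a held-out chunk) then picks a good one, and under a Bernstein
    condition this selection only costs O(log(K/delta)/n).
    """
    rng = np.random.default_rng(rng)
    n = len(data)
    K = max(1, int(np.ceil(np.log2(2.0 / delta))))   # number of ERM copies
    folds = np.array_split(rng.permutation(n), K + 1)
    holdout = data[folds[-1]]                         # reserved for selection
    candidates = [erm_fit(data[idx]) for idx in folds[:-1]]
    # Select the candidate with the smallest empirical risk on the hold-out.
    holdout_risk = [np.mean([loss(h, z) for z in holdout]) for h in candidates]
    return candidates[int(np.argmin(holdout_risk))]
```

The selection rule shown here (ERM over the finite candidate set) is where exp-concavity matters: it is what lets the log(1/δ) confidence factor enter without degrading the O(d/n) rate.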


Similar articles

Fast rates with high probability in exp-concave statistical learning: A. Proofs for Stochastic Exp-Concave Optimization

This condition is equivalent to stochastic mixability as well as the pseudoprobability convexity (PPC) condition, both defined by Van Erven et al. (2015). To be precise, for stochastic mixability, in Definition 4.1 of Van Erven et al. (2015), take their F_d and F both equal to our F, their P equal to {P}, and ψ(f) = f*; then strong stochastic mixability holds. Likewise, for the PPC condition, i...
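For orientation, the stochastic mixability condition referenced in the excerpt can be stated as follows; this is my paraphrase of Definition 4.1 of Van Erven et al. (2015), specialized as described above, not a verbatim quotation.

```latex
% Paraphrase of eta-stochastic mixability (Van Erven et al., 2015, Def. 4.1),
% specialized as in the excerpt: their F_d = F, their set of distributions
% = {P}, and psi(f) = f^*.  The condition asks that for all f in F:
\[
  \mathbb{E}_{Z \sim P}\!\left[
    \exp\!\Big( \eta \big( \ell(f^{*}, Z) - \ell(f, Z) \big) \Big)
  \right] \;\le\; 1 .
\]
% Roughly, for an eta-exp-concave loss over a convex class this holds with
% f^* the risk minimizer, which is how exp-concavity yields the condition.
```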


Dimension-free Information Concentration via Exp-Concavity

Information concentration of probability measures has important implications in learning theory. Recently, it was discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potentials of the log-concave distribution are exp-concave, which i...


Open Problem: Fast Stochastic Exp-Concave Optimization

Stochastic exp-concave optimization is an important primitive in machine learning that captures several fundamental problems, including linear regression, logistic regression and more. The exp-concavity property allows for fast convergence rates, as compared to general stochastic optimization. However, current algorithms that attain such rates scale poorly with the dimension n and run in time O...


From exp-concavity to variance control: High probability O(1/n) rates and high probability online-to-batch conversion

We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk O(d/n) with high probability: the dependence on the confidence parameter δ is polylogarithmic in 1/δ. The core technique is to boost the confidence of recent O(d/n) bounds, without sacrificing the rate, by leveraging a Bernstein-type condition which holds due to ...
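The online-to-batch conversion named in this title is, in its standard form, just averaging the online learner's iterates over one pass through the sample; the abstract's claim is that for exp-concave losses this conversion inherits the learner's regret bound as a high-probability excess-risk bound. Here is a minimal sketch of that standard conversion; the `predict`/`update` learner interface is a hypothetical stand-in, not an API from the paper.

```python
import numpy as np

def online_to_batch(learner, data):
    """Standard averaging-based online-to-batch conversion (sketch).

    Runs the online learner over the sample once, recording the iterate
    it would play before seeing each example, and returns the average
    w_bar = (w_1 + ... + w_n) / n.
    """
    iterates = []
    for z in data:
        w = learner.predict()    # current iterate, before seeing z
        iterates.append(np.asarray(w))
        learner.update(z)        # one online step on example z
    return np.mean(iterates, axis=0)
```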


A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer

In this paper, we present a simple analysis establishing fast rates with high probability for empirical minimization in stochastic composite optimization over a finite-dimensional bounded convex set, with exponentially concave loss functions and an arbitrary convex regularizer. To the best of our knowledge, this result is the first of its kind. As a byproduct, we can directly obtain the fast rate with ...
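As a concrete instance of the composite objective this abstract describes, one can take logistic loss (a standard example of an exp-concave loss on a bounded domain) plus an l1 penalty standing in for the "arbitrary convex regularizer". The sketch below, assuming NumPy and SciPy are available, is purely illustrative and is not the paper's algorithm or analysis.

```python
import numpy as np
from scipy.optimize import minimize

def regularized_erm(X, y, lam=0.1):
    """Composite-objective sketch: exp-concave loss + convex regularizer.

    Minimizes mean logistic loss (exp-concave on a bounded domain) plus
    an l1 penalty, i.e. a concrete instance of regularized empirical
    minimization.  Nelder-Mead is used only because the l1 term is
    non-smooth; any solver for the composite objective would do.
    """
    n, d = X.shape

    def objective(w):
        margins = y * (X @ w)                          # labels y in {-1, +1}
        logistic = np.mean(np.log1p(np.exp(-margins)))
        return logistic + lam * np.sum(np.abs(w))

    res = minimize(objective, x0=np.zeros(d), method="Nelder-Mead")
    return res.x
```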




Publication date: 2017